Comparative analysis on cross-modal information retrieval: A review

Authors

Abstract

Human beings experience life through a spectrum of modes such as vision, taste, hearing, smell, and touch. These multiple modes are integrated for information processing in our brain through a complex network of neuron connections. Likewise, for artificial intelligence to mimic the human way of learning and evolve into its next generation, it should handle multi-modal fusion efficiently. A modality is a channel that conveys information about an object or event, such as an image, text, video, or audio. A research problem is said to be multi-modal when it incorporates information from more than a single modality. Multi-modal systems allow data of one mode to be queried for results of any (same or varying) modality, whereas a cross-modal system strictly retrieves results of a dissimilar modality. As the input and output of a query belong to diverse modal families, their coherent comparison is still an open challenge, in its most primitive form because content similarity is defined subjectively. Numerous techniques have been proposed by researchers to handle this issue and to reduce the semantic gap in retrieval among different modalities. This paper focuses on a comparative analysis of various works in the field of cross-modal retrieval. A comparison of several representations and of the results of state-of-the-art methods applied to benchmark datasets is also discussed. In the end, open issues are presented to enable a better understanding of the present scenario and to identify future directions.
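To make the retrieval setting concrete, the minimal sketch below shows the shared-subspace formulation that many cross-modal methods build on: features from each modality are projected into a common space and candidates are ranked by cosine similarity. It is a generic illustration, not any specific method from the review; the projection matrices W_txt and W_img, the feature dimensions, and the random data are placeholder assumptions standing in for whatever a given approach (e.g. CCA, deep networks, or hashing) would actually learn.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Scale vectors to unit length so dot products equal cosine similarity."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def cross_modal_rank(query_vec, W_query, gallery_feats, W_gallery):
    """Project a query from one modality and a gallery from another modality
    into a shared space, then rank gallery items by cosine similarity."""
    q = l2_normalize(query_vec @ W_query)        # shape (d_common,)
    g = l2_normalize(gallery_feats @ W_gallery)  # shape (n, d_common)
    scores = g @ q                               # cosine similarities
    return np.argsort(-scores), scores

# Toy example with random features and random (untrained) projections.
rng = np.random.default_rng(0)
d_txt, d_img, d_common, n_images = 300, 2048, 128, 5
W_txt = rng.standard_normal((d_txt, d_common))          # learned in practice
W_img = rng.standard_normal((d_img, d_common))          # learned in practice
text_query = rng.standard_normal(d_txt)                 # e.g. a text embedding
image_gallery = rng.standard_normal((n_images, d_img))  # e.g. CNN features

ranking, scores = cross_modal_rank(text_query, W_txt, image_gallery, W_img)
print("Images ranked for the text query:", ranking)
```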


Similar Articles

Cross-Modal Information Retrieval - A Case Study on Chinese Wikipedia

Probability models have been used in cross-modal multimedia information retrieval recently by building conjunctive models bridging the text and image components. Previous studies have shown that cross-modal information retrieval system using the topic correlation model (TCM) outperforms state-of-the-art models in English corpus. In this paper, we will focus on the Chinese language, which is dif...


A Comprehensive Survey on Cross-modal Retrieval

In recent years, cross-modal retrieval has drawn much attention due to the rapid growth of multimodal data. It takes one type of data as the query to retrieve relevant data of another type. For example, a user can use a text to retrieve relevant pictures or videos. Since the query and its retrieved results can be of different modalities, how to measure the content similarity between different m...


A Review on the Cross and Multilingual Information Retrieval

In this paper we explore some of the most important areas of information retrieval. In particular, Crosslingual Information Retrieval (CLIR) and Multilingual Information Retrieval (MLIR). CLIR deals with asking questions in one language and retrieving documents in different language. MLIR deals with asking questions in one or more languages and retrieving documents in one or more different lang...


Cross-Modal Manifold Learning for Cross-modal Retrieval

This paper presents a new scalable algorithm for cross-modal similarity preserving retrieval in a learnt manifold space. Unlike existing approaches that compromise between preserving global and local geometries, the proposed technique respects both simultaneously during manifold alignment. The global topologies are maintained by recovering underlying mapping functions in the joint manifold spac...
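The snippet above does not give the algorithm's details, so as a rough point of reference only, here is a textbook-style joint Laplacian-eigenmaps alignment: per-modality kNN graphs capture local geometry, and correspondence links couple the two modalities into one embedding. The function joint_manifold_embedding, the coupling weight mu, and the synthetic data are illustrative assumptions, not the scalable method proposed in that paper.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph

def joint_manifold_embedding(X, Y, n_neighbors=5, d_out=2, mu=1.0):
    """Embed two feature sets with known one-to-one correspondences into a
    shared low-dimensional space via Laplacian eigenmaps on a joint graph.
    Local geometry comes from per-modality kNN graphs; the cross-modal
    coupling (strength mu) ties corresponding items together."""
    n = X.shape[0]
    Wx = kneighbors_graph(X, n_neighbors, mode="connectivity").toarray()
    Wy = kneighbors_graph(Y, n_neighbors, mode="connectivity").toarray()
    Wx = np.maximum(Wx, Wx.T)   # symmetrize the kNN graphs
    Wy = np.maximum(Wy, Wy.T)
    C = mu * np.eye(n)          # correspondence links between the modalities
    W = np.block([[Wx, C], [C, Wy]])
    L = laplacian(W, normed=True)
    vals, vecs = eigh(L)
    # Skip the trivial first eigenvector; the next d_out give the embedding.
    Z = vecs[:, 1:d_out + 1]
    return Z[:n], Z[n:]         # embeddings for X and Y respectively

# Synthetic paired data: Y is a noisy linear view of X.
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 50))
Y = X @ rng.standard_normal((50, 20)) + 0.1 * rng.standard_normal((100, 20))
Zx, Zy = joint_manifold_embedding(X, Y)
print("Shared-space shapes:", Zx.shape, Zy.shape)
```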


Cross-Modal Retrieval: A Pairwise Classification Approach

Content is increasingly available in multiple modalities (such as images, text, and video), each of which provides a different representation of some entity. The cross-modal retrieval problem is: given the representation of an entity in one modality, find its best representation in all other modalities. We propose a novel approach to this problem based on pairwise classification. The approach s...
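The snippet stops before describing the classifier, so the sketch below illustrates only the general pairwise-classification idea with assumed stand-ins: concatenated image-text feature pairs labelled as matching or mismatching, scikit-learn's LogisticRegression as the pair scorer, and synthetic features in place of real ones. Retrieval then ranks candidates by the predicted matching probability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
d_img, d_txt, n_pairs = 64, 32, 200

# Synthetic training pairs: matched (label 1) and mismatched (label 0).
img_feats = rng.standard_normal((n_pairs, d_img))
txt_feats = rng.standard_normal((n_pairs, d_txt))
X_pos = np.hstack([img_feats, txt_feats])                   # aligned pairs
X_neg = np.hstack([img_feats, rng.permutation(txt_feats)])  # shuffled pairs
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(n_pairs), np.zeros(n_pairs)])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Retrieval: score every candidate text against one image query and rank.
query_img = img_feats[0]
candidates = txt_feats
pair_feats = np.hstack([np.tile(query_img, (len(candidates), 1)), candidates])
scores = clf.predict_proba(pair_feats)[:, 1]   # probability of "matching"
print("Top-5 texts for the image query:", np.argsort(-scores)[:5])
```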



Journal

Journal: Computer Science Review

Year: 2021

ISSN: 1876-7745, 1574-0137

DOI: https://doi.org/10.1016/j.cosrev.2020.100336